1.
J Med Internet Res ; 26: e55779, 2024 Apr 09.
Article in English | MEDLINE | ID: mdl-38593431

ABSTRACT

Practitioners of digital health are familiar with disjointed data environments that often inhibit effective communication among different elements of the ecosystem. This fragmentation leads, in turn, to problems such as mismatches between services delivered and payments made, wastage, and, notably, care that falls short of best practice. Despite the long-standing recognition of interoperable data as a potential solution, efforts to achieve interoperability have been disjointed and inconsistent, producing numerous incompatible standards despite widespread agreement that fewer standards would enhance interoperability. This paper introduces a framework for understanding health care data needs and discusses the challenges and opportunities of open data standards in the field. It emphasizes the necessity of acknowledging diverse data standards, each catering to specific viewpoints and needs, and proposes a categorization of health care data into three domains, each with distinct characteristics and challenges. It also outlines overarching design requirements applicable to all domains, as well as specific requirements unique to each domain.


Subject(s)
Delivery of Health Care , Humans
2.
Learn Health Syst ; 7(4): e10387, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37860058

ABSTRACT

Introduction: Medical knowledge is complex and constantly evolving, making it challenging to disseminate and retrieve effectively. To address these challenges, researchers are exploring the use of formal knowledge representations that can be easily interpreted by computers. Methods: Evidence Hub is a new, free, online platform that hosts computable clinical knowledge in the form of "Knowledge Objects". These objects represent various types of computer-interpretable knowledge. The platform includes features that encourage advancing medical knowledge, such as public discussion threads for civil discourse about each Knowledge Object, thus building communities of interest that can form and reach consensus on the correctness, applicability, and proper use of the object. Knowledge Objects are maintained by volunteers and published on Evidence Hub under GPL 2.0. Peer review and quality assurance are provided by volunteers. Results: Users can explore Evidence Hub and participate in discussions using a web browser. An application programming interface allows applications to register themselves as handlers of specific object types and provide editing and execution capabilities for particular object types. Conclusions: By providing a platform for computable clinical knowledge and fostering discussion and collaboration, Evidence Hub improves the dissemination and use of medical knowledge.

3.
Learn Health Syst ; 7(4): e10388, 2023 Oct.
Article in English | MEDLINE | ID: mdl-37860059

ABSTRACT

Introduction: Quality indicators play an essential role in a learning health system. They help healthcare providers to monitor the quality and safety of care delivered and to identify areas for improvement. Clinical quality indicators therefore need to be based on real-world data, but generating reliable and actionable data routinely is challenging. Healthcare data are often stored in different formats and use different terminologies and coding systems, making it difficult to generate and compare indicator reports from different sources. Methods: The Observational Health Data Sciences and Informatics (OHDSI) community maintains the Observational Medical Outcomes Partnership (OMOP) Common Data Model. This is an open data standard providing a computable and interoperable format for real-world data. We implemented a Computable Biomedical Knowledge (CBK) object in the Piano Platform based on OMOP. The CBK calculates an inpatient quality indicator and was illustrated using synthetic electronic health record (EHR) data in the open OMOP standard. Results: The CBK reported the in-hospital mortality of patients admitted for acute myocardial infarction (AMI) in the synthetic EHR dataset and includes interactive visualizations and the results of calculations. Value sets composed of OMOP concept codes for AMI and the comorbidities used in the indicator calculation were also created. Conclusion: CBK objects that operate on OMOP data can be reused across datasets that conform to OMOP. With OMOP being a widely used interoperability standard, quality indicators embedded in CBKs can accelerate the generation of evidence for targeted quality and safety management, improving care to benefit larger populations.
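The indicator logic described above can be sketched in miniature. The snippet below is a hypothetical simplification, not the actual CBK: it assumes a pre-joined admission-level table, whereas a real OMOP implementation would join visit_occurrence, condition_occurrence, and the death table by person_id and standard concept IDs. All values are invented.

```python
# Invented toy data standing in for a pre-joined OMOP query result.
visits = [
    {"person_id": 1, "ami_admission": True,  "died_in_hospital": True},
    {"person_id": 2, "ami_admission": True,  "died_in_hospital": False},
    {"person_id": 3, "ami_admission": True,  "died_in_hospital": False},
    {"person_id": 4, "ami_admission": False, "died_in_hospital": False},
    {"person_id": 5, "ami_admission": True,  "died_in_hospital": True},
]

def ami_inpatient_mortality(rows):
    """In-hospital mortality among admissions coded with AMI."""
    ami = [r for r in rows if r["ami_admission"]]
    deaths = sum(r["died_in_hospital"] for r in ami)
    return deaths / len(ami)

print(f"AMI in-hospital mortality: {ami_inpatient_mortality(visits):.0%}")  # 50%
```

Because the computation depends only on the shape of the standardized data, the same function applies unchanged to any dataset conforming to the same model, which is the reuse property the abstract highlights.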

4.
Syst Rev ; 8(1): 143, 2019 06 18.
Article in English | MEDLINE | ID: mdl-31215463

ABSTRACT

BACKGROUND: Although many aspects of systematic reviews use computational tools, systematic reviewers have been reluctant to adopt machine learning tools. DISCUSSION: We argue that the reasons for the slow adoption of machine learning tools into systematic reviews are multifactorial. We focus on the current absence of trust in automation and on set-up challenges as major barriers to adoption. It is important that reviews produced using automation tools are considered non-inferior or superior to current practice. However, this standard alone will likely not be sufficient to lead to widespread adoption. As with many technologies, it is important that reviewers see "others" in the review community using automation tools. Adoption will also be slow if the automation tools are not compatible with the workflows and tasks currently used to produce reviews. Many automation tools being developed for systematic reviews mimic classification problems. Therefore, the evidence that these automation tools are non-inferior or superior can be presented using methods similar to diagnostic test evaluations, i.e., precision and recall compared with a human reviewer. However, the assessment of automation tools does present unique challenges for investigators and systematic reviewers, including the need to clarify which metrics are of interest to the systematic review community and the unique documentation challenges of reproducible software experiments. CONCLUSION: We discuss adoption barriers with the goal of providing tool developers with guidance on how to design and report such evaluations, and end users with guidance on how to assess their validity. Further, we discuss approaches to formatting and announcing publicly available datasets suitable for the assessment of automation technologies and tools. Making these resources available will increase trust that tools are non-inferior or superior to current practice. Finally, we note that, even with evidence that automation tools are non-inferior or superior to current practice, substantial set-up challenges remain for mainstream integration of automation into the systematic review process.


Subject(s)
Machine Learning , Systematic Reviews as Topic , Humans , Evidence-Based Practice , Reproducibility of Results , Trust
5.
Syst Rev ; 8(1): 57, 2019 02 20.
Article in English | MEDLINE | ID: mdl-30786933

ABSTRACT

The third meeting of the International Collaboration for Automation of Systematic Reviews (ICASR) was held 17-18 October 2017 in London, England. ICASR is an interdisciplinary group whose goal is to maximize the use of technology for conducting rapid, accurate, and efficient systematic reviews of scientific evidence. The group seeks to facilitate the development and widespread acceptance of automated techniques for systematic reviews. The meeting's conclusion was that the most pressing needs at present are to develop approaches for validating currently available tools and to provide increased access to curated corpora that can be used for validation. To that end, ICASR's short-term goals in 2018-2019 are to propose and publish protocols for key tasks in systematic reviews and to develop an approach for sharing curated corpora for validating the automation of the key tasks.


Subject(s)
Goals , Search Engine , Systematic Reviews as Topic , Workflow , Automation/methods , Humans
6.
Intensive Care Med Exp ; 6(1): 19, 2018 Jul 27.
Article in English | MEDLINE | ID: mdl-30054764

ABSTRACT

This study examines the impact of cefepime and APP-β (antipseudomonal penicillin/β-lactamase inhibitor combinations) on Gram-negative bacterial colonization and resistance in two Australian ICUs. While resistance did not cumulatively increase, cefepime (but not APP-β treatment) was associated with acquisition of antibiotic-resistant Enterobacteriaceae, consistent with an ecological effect. Analysis of the resident gut E. coli population in a subset of patients showed an increase in markers of horizontal gene transfer after cefepime exposure, which helps explain the increase in APP-β resistance and reminds us that unmeasured impacts on the microbiome are key outcome determinants that need to be fully explored.

7.
Syst Rev ; 7(1): 77, 2018 05 19.
Article in English | MEDLINE | ID: mdl-29778096

ABSTRACT

Systematic reviews (SR) are vital to health care, but have become complicated and time-consuming, due to the rapid expansion of evidence to be synthesised. Fortunately, many tasks of systematic reviews have the potential to be automated or may be assisted by automation. Recent advances in natural language processing, text mining and machine learning have produced new algorithms that can accurately mimic human endeavour in systematic review activity, faster and more cheaply. Automation tools need to be able to work together, to exchange data and results. Therefore, we initiated the International Collaboration for the Automation of Systematic Reviews (ICASR), to successfully put all the parts of automation of systematic review production together. The first meeting was held in Vienna in October 2015. We established a set of principles to enable tools to be developed and integrated into toolkits. This paper sets out the principles devised at that meeting, which cover the need for improvement in efficiency of SR tasks, automation across the spectrum of SR tasks, continuous improvement, adherence to high quality standards, flexibility of use and combining components, the need for a collaboration and varied skills, the desire for open source, shared code and evaluation, and a requirement for replicability through rigorous and open evaluation. Automation has a great potential to improve the speed of systematic reviews. Considerable work is already being done on many of the steps involved in a review. The 'Vienna Principles' set out in this paper aim to guide a more coordinated effort which will allow the integration of work by separate teams and build on the experience, code and evaluations done by the many teams working across the globe.


Subject(s)
Automation/standards , Data Accuracy , Systematic Reviews as Topic , Algorithms , Automation/methods , Cooperative Behavior , Data Mining , Humans , Machine Learning , Natural Language Processing
8.
Syst Rev ; 7(1): 64, 2018 04 25.
Article in English | MEDLINE | ID: mdl-29695296

ABSTRACT

BACKGROUND: Screening candidate studies for inclusion in a systematic review is time-consuming when conducted manually. Automation tools could reduce the human effort devoted to screening. Existing methods use supervised machine learning which train classifiers to identify relevant words in the abstracts of candidate articles that have previously been labelled by a human reviewer for inclusion or exclusion. Such classifiers typically reduce the number of abstracts requiring manual screening by about 50%. METHODS: We extracted four key characteristics of observational studies (population, exposure, confounders and outcomes) from the text of titles and abstracts for all articles retrieved using search strategies from systematic reviews. Our screening method excluded studies if they did not meet a predefined set of characteristics. The method was evaluated using three systematic reviews. Screening results were compared to the actual inclusion list of the reviews. RESULTS: The best screening threshold rule identified studies that mentioned both exposure (E) and outcome (O) in the study abstract. This screening rule excluded 93.7% of retrieved studies with a recall of 98%. CONCLUSIONS: Filtering studies for inclusion in a systematic review based on the detection of key study characteristics in abstracts significantly outperformed standard approaches to automated screening and appears worthy of further development and evaluation.
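The threshold rule reported in the results (exclude a study unless its abstract mentions both an exposure and an outcome) can be sketched as a simple keyword filter. The term lists below are invented placeholders for the curated vocabularies such a system would actually use.

```python
import re

# Toy term lists; a real screening system would use curated terminologies.
EXPOSURE_TERMS = re.compile(r"\b(exposure|exposed|intake|smoking)\b", re.IGNORECASE)
OUTCOME_TERMS = re.compile(r"\b(mortality|incidence|risk|cancer)\b", re.IGNORECASE)

def include_for_screening(abstract: str) -> bool:
    """Keep a study for manual screening only if the abstract mentions
    both an exposure (E) and an outcome (O); otherwise exclude it."""
    return bool(EXPOSURE_TERMS.search(abstract)) and bool(OUTCOME_TERMS.search(abstract))

print(include_for_screening("Smoking and lung cancer incidence in adults"))  # True
print(include_for_screening("A survey of hospital staffing levels"))         # False
```

The rule trades a small loss of recall (98% in the study) for a large reduction in manual workload (93.7% of retrieved studies excluded), which is why requiring both E and O outperformed looser single-characteristic rules.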


Subject(s)
Automation , Biomedical Research , Machine Learning , Systematic Reviews as Topic , Humans , Automation/methods
9.
Syst Rev ; 7(1): 3, 2018 01 09.
Article in English | MEDLINE | ID: mdl-29316980

ABSTRACT

The second meeting of the International Collaboration for Automation of Systematic Reviews (ICASR) was held 3-4 October 2016 in Philadelphia, Pennsylvania, USA. ICASR is an interdisciplinary group whose aim is to maximize the use of technology for conducting rapid, accurate, and efficient systematic reviews of scientific evidence. Having automated tools for systematic review should enable more transparent and timely review, maximizing the potential for identifying and translating research findings to practical application. The meeting brought together multiple stakeholder groups, including users of summarized research, methodologists who explore production processes and systematic review quality, and technologists such as software developers, statisticians, and vendors. This diversity of participants was intended to ensure effective communication with numerous stakeholders about progress toward automation of systematic reviews and to stimulate discussion about potential solutions to identified challenges. The meeting highlighted challenges, both simple and complex, and raised awareness among participants about ongoing efforts by various stakeholders. An outcome of this forum was the identification of several short-term projects that participants felt would advance the automation of tasks in the systematic review workflow, including (1) fostering better understanding about available tools, (2) developing validated datasets for testing new tools, (3) determining a standard method to facilitate interoperability of tools, such as through an application programming interface (API), and (4) establishing criteria to evaluate the quality of tools' output. ICASR 2016 provided a beneficial forum to foster focused discussion about tool development and resources and to reconfirm ICASR members' commitment to the automation of systematic reviews.


Subject(s)
Systematic Reviews as Topic , Technology Assessment, Biomedical , Humans , Automation/methods , Cooperative Behavior , Health Information Interoperability , Information Storage and Retrieval/methods , Information Storage and Retrieval/standards
10.
J Antimicrob Chemother ; 73(4): 883-890, 2018 04 01.
Article in English | MEDLINE | ID: mdl-29373760

ABSTRACT

Background: Multiresistance in Gram-negative bacteria is often due to acquisition of several different antibiotic resistance genes, each associated with a different mobile genetic element, that tend to cluster together in complex conglomerations. Accurate, consistent annotation of resistance genes, the boundaries and fragments of mobile elements, and signatures of insertion such as direct repeats (DR) facilitates comparative analysis of complex multiresistance regions and plasmids to better understand their evolution and how resistance genes spread. Objectives: To extend the Repository of Antibiotic resistance Cassettes (RAC) web site, which includes a database of 'features', and the Attacca automatic DNA annotation system, to encompass additional resistance genes and all types of associated mobile elements. Methods: Antibiotic resistance genes and mobile elements were added to RAC, from existing registries where possible. Attacca grammars were extended to accommodate the expanded database, to allow overlapping features to be annotated, and to identify and annotate features such as composite transposons and DR. Results: The Multiple Antibiotic Resistance Annotator (MARA) database includes antibiotic resistance genes and selected mobile elements from Gram-negative bacteria, distinguishing important variants. Sequences can be submitted to the MARA web site for annotation. A list of positions and orientations of annotated features, indicating those that are truncated, DR, and potential composite transposons, is provided for each sequence, as well as a diagram showing annotated features approximately to scale. Conclusions: The MARA web site (http://mara.spokade.com) provides a comprehensive database for mobile antibiotic resistance in Gram-negative bacteria and accurately annotates resistance genes and associated mobile elements in submitted sequences to facilitate comparative analysis.


Subject(s)
Automation, Laboratory/methods , Drug Resistance, Bacterial , Gram-Negative Bacteria/drug effects , Gram-Negative Bacteria/genetics , Interspersed Repetitive Sequences , Molecular Sequence Annotation/methods , Databases, Nucleic Acid , Internet
11.
BMC Health Serv Res ; 17(1): 502, 2017 07 21.
Article in English | MEDLINE | ID: mdl-28732500

ABSTRACT

BACKGROUND: Clinical quality indicators are used to monitor the performance of healthcare services and should wherever possible be based on research evidence. Little is known however about the extent to which indicators in common use are based on research. The objective of this study is to measure the extent to which clinical quality indicators used in asthma management in children with outcome measurements can be linked to results in randomised controlled clinical trial (RCT) reports. This work is part of a broader research program to trial methods that improve the efficiency and accuracy of indicator development. METHODS: National-level indicators for asthma management in children were extracted from the National Quality Measures Clearinghouse database and the National Institute for Health and Care Excellence quality standards by two independent appraisers. Outcome measures were extracted from all published English language RCT reports for asthma management in children below the age of 12 published between 2005 and 2014. The two sets were then linked by manually mapping both to a common set of Unified Medical Language System (UMLS) concepts. RESULTS: The analysis identified 39 indicators and 562 full text RCTs dealing with asthma management in children. About 95% (37/39) of the indicators could be linked to RCT outcome measures. CONCLUSIONS: It is possible to identify relevant RCT reports for the majority of indicators used to assess the quality of asthma management in childhood. The methods reported here could be automated to more generally support assessment of candidate indicators against the research evidence.


Subject(s)
Asthma/therapy , Outcome Assessment, Health Care/methods , Quality Indicators, Health Care , Child , Child, Preschool , Health Services , Humans , Infant , Randomized Controlled Trials as Topic , Unified Medical Language System
12.
Int J Qual Health Care ; 29(4): 571-578, 2017 Aug 01.
Article in English | MEDLINE | ID: mdl-28651340

ABSTRACT

OBJECTIVE: Quality improvement in health care requires robust, measurable indicators to track performance. However, identifying which indicators are supported by strong clinical evidence, typically from clinical trials, is often laborious. This study tests a novel method for automatically linking indicators to clinical trial registrations. DESIGN: A set of 522 quality-of-care indicators for 22 common conditions, drawn from the CareTrack study, was automatically mapped to outcome measures reported in 13 971 trials from ClinicalTrials.gov. INTERVENTION: Text mining methods extracted phrases mentioning indicators and outcome phrases, and these were compared using the Levenshtein edit distance ratio to measure similarity. MAIN OUTCOME MEASURE: Number of care indicators that mapped to outcome measures in clinical trials. RESULTS: While only 13% of the 522 CareTrack indicators were thought to have Level I or II evidence behind them, 353 (68%) could be directly linked to randomized controlled trials. Within these, 50 of 70 (71%) Level I and II evidence-based indicators and 268 of 370 (72%) Level V (consensus-based) indicators could be linked to evidence. Of the indicators known to have evidence behind them, only 5.7% (4 of 70) were mentioned in the trial reports but missed by our method. CONCLUSIONS: We automatically linked indicators to clinical trial registrations with high precision. Whilst the majority of quality indicators studied could be directly linked to research evidence, a small proportion could not, and these require closer scrutiny. It is feasible to support the process of indicator development using automated methods to identify research evidence.
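The similarity measure named in the intervention, the Levenshtein edit distance ratio, can be computed as below. This is a generic sketch of the measure, not the study's code, and the indicator and outcome phrases are invented examples.

```python
def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance (insert/delete/substitute)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,                 # deletion
                            curr[j - 1] + 1,             # insertion
                            prev[j - 1] + (ca != cb)))   # substitution
        prev = curr
    return prev[-1]

def similarity_ratio(a: str, b: str) -> float:
    """Edit-distance similarity in [0, 1]; 1.0 means identical strings."""
    if not a and not b:
        return 1.0
    return 1 - levenshtein(a, b) / max(len(a), len(b))

indicator_phrase = "hba1c measured annually"   # invented example phrases
outcome_phrase = "hba1c measured yearly"
print(round(similarity_ratio(indicator_phrase, outcome_phrase), 2))
```

A threshold on this ratio then decides whether an indicator phrase and a trial outcome phrase count as a link; the threshold choice controls the precision/recall trade-off reported in the results.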


Subject(s)
Data Mining/methods , Quality Indicators, Health Care , Randomized Controlled Trials as Topic , Humans , Outcome Assessment, Health Care
13.
J Biomed Inform ; 70: 27-34, 2017 06.
Article in English | MEDLINE | ID: mdl-28455150

ABSTRACT

INTRODUCTION: Most data extraction efforts in epidemiology are focused on obtaining targeted information from clinical trials. In contrast, limited research has been conducted on the identification of information from observational studies, a major source of human evidence in many fields, including environmental health. The recognition of key epidemiological information (e.g., exposures) through text mining techniques can assist in the automation of systematic reviews and other evidence summaries. METHOD: We designed and applied a knowledge-driven, rule-based approach to identify targeted information (study design, participant population, exposure, outcome, confounding factors, and the country where the study was conducted) from abstracts of epidemiological studies included in several systematic reviews of environmental health exposures. The rules were based on common syntactical patterns observed in text and are thus not specific to any systematic review. To validate the general applicability of our approach, we compared the data extracted using our approach against hand curation for 35 epidemiological study abstracts manually selected for inclusion in two systematic reviews. RESULTS: The F-score, precision, and recall ranged from 70% to 98%, 81% to 100%, and 54% to 97%, respectively. The highest precision was observed for exposure, outcome, and population (100%), while recall was best for exposure and study design, at 97% and 89%, respectively. The lowest recall was observed for population (54%), which also had the lowest F-score (70%). CONCLUSION: Our text-mining approach demonstrated encouraging performance in identifying targeted information from abstracts of observational epidemiological studies related to environmental exposures. We have demonstrated that rules based on generic syntactic patterns in one corpus can be applied to other observational study designs by simply interchanging the dictionaries that identify particular characteristics (e.g., outcomes, exposures). At the document level, the recognised information can assist in the selection and categorization of studies included in a systematic review.
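The dictionary-swapping idea described above can be sketched as follows. The word lists are toy stand-ins for the curated dictionaries the paper describes; replacing them is what adapts the same rules to a new review topic.

```python
import re

# Toy dictionaries standing in for curated ones; swapping these lists
# adapts the extraction rules to a different systematic review topic.
DICTIONARIES = {
    "exposure": r"(air pollution|pesticide exposure|lead exposure)",
    "outcome": r"(asthma|lung function|birth weight)",
    "design": r"(cohort study|case-control study|cross-sectional study)",
}

def extract_characteristics(abstract: str) -> dict:
    """Return the first dictionary match for each study characteristic,
    or None when nothing in the abstract matches."""
    found = {}
    for field, pattern in DICTIONARIES.items():
        m = re.search(pattern, abstract, re.IGNORECASE)
        found[field] = m.group(0).lower() if m else None
    return found

abstract = ("In this cohort study we examined air pollution "
            "and asthma among school-age children.")
print(extract_characteristics(abstract))
# {'exposure': 'air pollution', 'outcome': 'asthma', 'design': 'cohort study'}
```

Because the matching logic is separate from the vocabularies, only the dictionaries need to change between corpora, which is the generalisability claim made in the conclusion.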


Subject(s)
Automation , Data Mining , Review Literature as Topic
14.
J Biomed Inform ; 59: 308-15, 2016 Feb.
Article in English | MEDLINE | ID: mdl-26732996

ABSTRACT

OBJECTIVE: To introduce and evaluate a method that uses electronic medical record (EMR) data to measure the effects of computer system downtime on clinical processes associated with pathology testing and results reporting. MATERIALS AND METHODS: A matched case-control design was used to examine the effects of five downtime events over 11 months, ranging from 5 to 300 minutes. Four indicator tests representing different laboratory workflows were selected to measure delays and errors: potassium, haemoglobin, troponin, and activated partial thromboplastin time. Tests exposed to a downtime were matched to tests during unaffected control periods by test type, time of day, and day of week. Measures included clinician read time (CRT), laboratory turnaround time (LTAT), and rates of missed reads, futile searches, duplicate orders, and missing test results. RESULTS: The effects of downtime varied with the type of IT problem. When clinicians could not log on to a results reporting system for 17 minutes, the CRT for potassium and haemoglobin tests was five times (10.3 vs. 2.0 days) and six times (13.4 vs. 2.1 days) longer than control (p=0.01-0.04; p=0.0001-0.003). Clinician follow-up of tests was also delayed by another downtime involving a power outage, although the effect was small. In contrast, laboratory processing of troponin tests was unaffected by network services and routing problems. Errors including missed reads, futile searches, duplicate orders, and missing test results could not be examined because the sample size of affected tests was insufficient for statistical testing. CONCLUSION: This study demonstrates the feasibility of using routinely collected EMR data with a matched case-control design to measure the effects of downtime on clinical processes. Even brief system downtimes may impact patient care. The methodology has the potential to be applied to other clinical processes with established workflows where tasks are pre-defined, such as medications management.


Subject(s)
Computer Communication Networks/standards , Equipment Failure/statistics & numerical data , Medical Informatics/standards , Patient Safety , Case-Control Studies , Humans , Laboratories, Hospital , Workflow
15.
BMJ Open ; 5(9): e008819, 2015 Sep 08.
Article in English | MEDLINE | ID: mdl-26351189

ABSTRACT

INTRODUCTION: Clinical quality indicators are necessary to monitor the performance of healthcare services. The development of indicators should, wherever possible, be based on research evidence to minimise the risk of bias which may be introduced during their development, because of logistic, ethical or financial constraints alone. The development of automated methods to identify the evidence base for candidate indicators should improve the process of indicator development. The objective of this study is to explore the relationship between clinical quality indicators for asthma management in children with outcome and process measurements extracted from randomised controlled clinical trial reports. METHODS AND ANALYSIS: National-level indicators for asthma management in children will be extracted from the National Quality Measures Clearinghouse (NQMC) database and the National Institute for Health and Care Excellence (NICE) quality standards. Outcome measures will be extracted from published English language randomised controlled trial (RCT) reports for asthma management in children aged below 12 years. The two sets of measures will be compared to assess any overlap. The study will provide insights into the relationship between clinical quality indicators and measurements in RCTs. This study will also yield a list of measurements used in RCTs for asthma management in children, and will find RCT evidence for indicators used in practice. ETHICS AND DISSEMINATION: Ethical approval is not necessary because this study will not include patient data. Findings will be disseminated through peer-reviewed publications.


Subject(s)
Asthma/therapy , Biomedical Research , Delivery of Health Care/standards , Disease Management , Outcome Assessment, Health Care , Quality Indicators, Health Care , Child , Humans , Research Design
16.
Stud Health Technol Inform ; 216: 761-5, 2015.
Article in English | MEDLINE | ID: mdl-26262154

ABSTRACT

The manner in which people preferentially interact with others like themselves suggests that information about social connections may be useful in the surveillance of opinions for public health purposes. We examined whether social connection information from tweets about human papillomavirus (HPV) vaccines could be used to train classifiers that identify anti-vaccine opinions. From 42,533 tweets posted between October 2013 and March 2014, 2,098 were sampled at random and two investigators independently identified anti-vaccine opinions. Machine learning methods were used to train classifiers using the first three months of data, including content (8,261 text fragments) and social connections (10,758 relationships). Connection-based classifiers performed similarly to content-based classifiers on the first three months of training data, and performed more consistently than content-based classifiers on test data from the subsequent three months. The most accurate classifier achieved an accuracy of 88.6% on the test data set, and used only social connection features. Information about how people are connected, rather than what they write, may be useful for improving public health surveillance methods on Twitter.
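The homophily intuition behind connection-based classification can be illustrated with a deliberately minimal sketch: predict a user's stance from the labelled stances of accounts they are connected to. This is not the paper's actual classifier (which was trained on connection features with machine learning methods), and all account names and labels below are invented.

```python
# Invented labelled accounts; in practice labels come from manual annotation.
known_stance = {"@acctA": "anti", "@acctB": "anti", "@acctC": "neutral"}

def predict_stance(connections, labels, default="neutral"):
    """Majority vote over a user's connections that have known labels,
    reflecting the tendency of like-minded users to connect (homophily)."""
    votes = [labels[c] for c in connections if c in labels]
    if not votes:
        return default
    return max(set(votes), key=votes.count)

print(predict_stance(["@acctA", "@acctB", "@unknown"], known_stance))  # anti
```

The sketch shows why connection features can generalise better over time than content features: who a user follows tends to be more stable than the vocabulary of their tweets.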


Subject(s)
Data Mining/methods , Papillomavirus Vaccines , Public Opinion , Social Media/statistics & numerical data , Vaccination/psychology , Attitude to Health , Natural Language Processing , Social Support
17.
J Clin Epidemiol ; 68(1): 87-93, 2015 Jan.
Article in English | MEDLINE | ID: mdl-25450452

ABSTRACT

OBJECTIVES: To examine the use of supervised machine learning to identify biases in evidence selection and determine if citation information can predict favorable conclusions in reviews about neuraminidase inhibitors. STUDY DESIGN AND SETTING: Reviews of neuraminidase inhibitors published during January 2005 to May 2013 were identified by searching PubMed. In a blinded evaluation, the reviews were classified as favorable if investigators agreed that they supported the use of neuraminidase inhibitors for prophylaxis or treatment of influenza. Reference lists were used to identify all unique citations to primary articles. Three classification methods were tested for their ability to predict favorable conclusions using only citation information. RESULTS: Citations to 4,574 articles were identified in 152 reviews of neuraminidase inhibitors, and 93 (61%) of these reviews were graded as favorable. Primary articles describing drug resistance were among the citations that were underrepresented in favorable reviews. The most accurate classifier predicted favorable conclusions with 96.2% accuracy, using citations to only 24 of 4,574 articles. CONCLUSION: Favorable conclusions in reviews about neuraminidase inhibitors can be predicted using only information about the articles they cite. The approach highlights how evidence exclusion shapes conclusions in reviews and provides a method to evaluate citation practices in a corpus of reviews.


Subject(s)
Artificial Intelligence , Enzyme Inhibitors/standards , Enzyme Inhibitors/therapeutic use , Neuraminidase/antagonists & inhibitors , Publications/statistics & numerical data , Selection Bias , Humans , Influenza, Human/drug therapy , Research Design , Review Literature as Topic
18.
J Med Internet Res ; 16(10): e223, 2014 Oct 01.
Article in English | MEDLINE | ID: mdl-25274020

ABSTRACT

BACKGROUND: Snowballing involves recursively pursuing relevant references cited in the retrieved literature and adding them to the search results. Snowballing is an alternative approach to discover additional evidence that was not retrieved through conventional search. Snowballing's effectiveness makes it best practice in systematic reviews despite being time-consuming and tedious. OBJECTIVE: Our goal was to evaluate an automatic method for citation snowballing's capacity to identify and retrieve the full text and/or abstracts of cited articles. METHODS: Using 20 review articles that contained 949 citations to journal or conference articles, we manually searched Microsoft Academic Search (MAS) and identified 78.0% (740/949) of the cited articles that were present in the database. We compared the performance of the automatic citation snowballing method against the results of this manual search, measuring precision, recall, and F1 score. RESULTS: The automatic method was able to correctly identify 633 (as proportion of included citations: recall=66.7%, F1 score=79.3%; as proportion of citations in MAS: recall=85.5%, F1 score=91.2%) of citations with high precision (97.7%), and retrieved the full text or abstract for 490 (recall=82.9%, precision=92.1%, F1 score=87.3%) of the 633 correctly retrieved citations. CONCLUSIONS: The proposed method for automatic citation snowballing is accurate and is capable of obtaining the full texts or abstracts for a substantial proportion of the scholarly citations in review articles. By automating the process of citation snowballing, it may be possible to reduce the time and effort of common evidence surveillance tasks such as keeping trial registries up to date and conducting systematic reviews.
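The figures reported above are internally consistent and can be recomputed directly from the counts in the abstract (633 correctly identified citations, 949 included citations, 740 present in MAS, precision 97.7%):

```python
def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

precision = 0.977
recall_all = 633 / 949   # against all included citations
recall_mas = 633 / 740   # against the citations present in MAS

print(f"recall (all included): {recall_all:.1%}")        # 66.7%
print(f"F1 (all included):     {f1(precision, recall_all):.1%}")  # 79.3%
print(f"recall (in MAS):       {recall_mas:.1%}")        # 85.5%
print(f"F1 (in MAS):           {f1(precision, recall_mas):.1%}")  # 91.2%
```

Reporting recall against both denominators separates the method's matching accuracy (against what MAS contains) from the coverage limits of the database itself.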


Subject(s)
Information Storage and Retrieval/methods , Medical Informatics/methods , Databases, Factual , Evidence-Based Medicine , Humans , Registries , Review Literature as Topic
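The precision, recall, and F1 figures in the snowballing abstract above can be reproduced from the raw counts it reports (949 citations, 740 present in MAS, 633 correctly identified, precision 97.7%). The sketch below is a sanity check using the standard F1 definition, not the authors' actual evaluation code.

```python
# Recompute the reported recall and F1 figures from the abstract's counts.
# Counts come from the abstract; the harmonic-mean F1 formula is the
# standard definition and is assumed here, not taken from the paper.

def f1(precision: float, recall: float) -> float:
    """Harmonic mean of precision and recall."""
    return 2 * precision * recall / (precision + recall)

total_citations = 949   # citations in the 20 review articles
in_mas = 740            # of those, found in Microsoft Academic Search
identified = 633        # correctly identified by the automatic method
precision = 0.977       # reported precision of the automatic method

recall_all = identified / total_citations   # vs. all included citations
recall_mas = identified / in_mas            # vs. citations present in MAS

print(f"recall (all citations): {recall_all:.1%}, F1: {f1(precision, recall_all):.1%}")
print(f"recall (MAS subset):    {recall_mas:.1%}, F1: {f1(precision, recall_mas):.1%}")
```

Both computed pairs match the abstract: 66.7%/79.3% against all included citations, and 85.5%/91.2% against the MAS subset.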
19.
Ann Intern Med ; 161(7): 513-8, 2014 Oct 07.
Article in English | MEDLINE | ID: mdl-25285542

ABSTRACT

BACKGROUND: Industry funding and financial conflicts of interest may contribute to bias in the synthesis and interpretation of scientific evidence. OBJECTIVE: To examine the association between financial conflicts of interest and characteristics of systematic reviews of neuraminidase inhibitors. DESIGN: Retrospective analysis. SETTING: Reviews that examined the use of neuraminidase inhibitors in the prophylaxis or treatment of influenza, were published between January 2005 and May 2014, and used a systematic search protocol. MEASUREMENTS: Two investigators blinded to all information regarding the review authors independently assessed the presentation of evidence on the use of neuraminidase inhibitors as favorable or not favorable. Financial conflicts of interest were identified using the index reviews, other publications, and Web-based searches. Associations between financial conflicts of interest, favorability assessments, and presence of critical appraisals of evidence quality were analyzed. RESULTS: Twenty-six systematic reviews were identified, of which 13 examined prophylaxis and 24 examined treatment, accounting for 37 distinct assessments. Among assessments associated with a financial conflict of interest, 7 of 8 (88%) were classified as favorable, compared with 5 of 29 (17%) among those without a financial conflict of interest. Reviewers without financial conflicts of interest were more likely to include statements about the quality of the primary studies than those with financial conflicts of interest. LIMITATIONS: The heterogeneity in populations and outcomes examined in the reviews precluded analysis of the contribution of selective inclusion of evidence on the discordance of the assessments made in the reviews. Many of the systematic reviews had overlapping authorship. 
CONCLUSION: Reviewers with financial conflicts of interest may be more likely to present evidence about neuraminidase inhibitors in a favorable manner and recommend the use of these drugs than reviewers without financial conflicts of interest. PRIMARY FUNDING SOURCE: Australian National Health and Medical Research Council.


Subject(s)
Antiviral Agents/therapeutic use , Conflict of Interest , Drug Industry , Enzyme Inhibitors/therapeutic use , Influenza, Human/drug therapy , Neuraminidase/antagonists & inhibitors , Research Support as Topic , Humans , Retrospective Studies , Review Literature as Topic
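The conflicts-of-interest abstract above reports a 2×2 split of the 37 assessments: 7 of 8 with a financial conflict of interest were favorable, versus 5 of 29 without. A minimal sketch of the implied proportions and odds ratio, using only the counts given in the abstract (the odds ratio itself is an illustrative summary the abstract does not report):

```python
# 2x2 table from the abstract: favorability of 37 assessments by
# presence of a financial conflict of interest (COI).
# Counts are from the abstract; the odds ratio is added for illustration.

favorable_coi, unfavorable_coi = 7, 1       # 7 of 8 with COI favorable
favorable_none, unfavorable_none = 5, 24    # 5 of 29 without COI favorable

prop_coi = favorable_coi / (favorable_coi + unfavorable_coi)
prop_none = favorable_none / (favorable_none + unfavorable_none)

# Cross-product (sample) odds ratio for the 2x2 table.
odds_ratio = (favorable_coi * unfavorable_none) / (unfavorable_coi * favorable_none)

print(f"favorable with COI:    {prop_coi:.0%}")   # 88%, as reported
print(f"favorable without COI: {prop_none:.0%}")  # 17%, as reported
print(f"sample odds ratio:     {odds_ratio:.1f}")
```

The raw counts alone yield the 88% versus 17% contrast reported in the abstract; a formal significance test would need the original assessment-level data.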
20.
Syst Rev ; 3: 74, 2014 Jul 09.
Article in English | MEDLINE | ID: mdl-25005128

ABSTRACT

Systematic reviews, a cornerstone of evidence-based medicine, are not produced quickly enough to support clinical practice. The cost of production, availability of the requisite expertise, and timeliness are often cited as major contributors to the delay. This detailed survey of the state of the art of information systems designed to support or automate individual tasks in the systematic review, and in particular systematic reviews of randomized controlled clinical trials, reveals trends that see the convergence of several parallel research projects. We surveyed literature describing informatics systems that support or automate the processes of systematic review or each of the tasks of the systematic review. Several projects focus on automating, simplifying, and/or streamlining specific tasks of the systematic review. Some tasks are already fully automated while others are still largely manual. In this review, we describe each task and the effect that its automation would have on the entire systematic review process, summarize the existing information system support for each task, and highlight where further research is needed to realize automation for the task. Integration of the systems that automate systematic review tasks may lead to a revised systematic review workflow. We envisage that the optimized workflow will lead to a system in which each systematic review is described as a computer program that automatically retrieves relevant trials, appraises them, extracts and synthesizes data, evaluates the risk of bias, performs meta-analysis calculations, and produces a report in real time.


Subject(s)
Electronic Data Processing , Information Storage and Retrieval , Review Literature as Topic